Stochastic viscosity approximations of Hamilton–Jacobi equations and variance reduction

Authors

Abstract

We consider the computation of free energy-like quantities for diffusions when resorting to Monte Carlo simulation is necessary, for instance in high dimension. Such stochastic computations typically suffer from high variance, in particular in a low noise regime, because the expectation is dominated by rare trajectories for which the observable reaches large values. Although importance sampling, or tilting of trajectories, is now a standard technique for reducing the variance of such estimators, quantitative criteria proving that a given control reduces variance are scarce, and often do not apply in practical situations. The goal of this work is to provide such a criterion for assessing whether a given bias reduces the variance, and at which scale. We rely on a recently introduced notion of stochastic solution to Hamilton–Jacobi–Bellman (HJB) equations. Based on this tool, we introduce the notion of k-stochastic viscosity approximation (SVA) of a HJB equation. We next prove that such approximate solutions are associated with estimators having a relative variance of order k − 1 at log-scale. In particular, a sampling scheme built on a 1-SVA has bounded relative variance as the noise goes to zero. Finally, to show that our definition is relevant, we provide examples of approximations of orders one and two, together with a numerical illustration confirming the theoretical findings.
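The mechanism the abstract describes — tilting the sampling law toward the rare trajectories that dominate the expectation, then reweighting by the likelihood ratio — can be illustrated in a minimal one-dimensional sketch. This is not the paper's diffusion setting: it is a hypothetical toy example (exponential tilting of a Gaussian tail probability) that only shows why a well-chosen tilt shrinks the relative variance of a rare-event estimator.

```python
import math
import random

random.seed(0)

def naive_mc(a, sigma, n):
    # Direct Monte Carlo: indicator of the rare event {X > a}, X ~ N(0, sigma^2).
    hits = sum(1 for _ in range(n) if random.gauss(0.0, sigma) > a)
    return hits / n

def tilted_mc(a, sigma, n):
    # Importance sampling: sample Y ~ N(a, sigma^2) (samples "tilted" toward
    # the rare event) and reweight by the likelihood ratio
    # dN(0, sigma^2)/dN(a, sigma^2)(y) = exp((a^2 - 2*a*y) / (2*sigma^2)).
    total = 0.0
    for _ in range(n):
        y = random.gauss(a, sigma)
        if y > a:
            total += math.exp((a * a - 2.0 * a * y) / (2.0 * sigma * sigma))
    return total / n

a, sigma, n = 2.0, 0.5, 10_000
exact = 0.5 * math.erfc(a / (sigma * math.sqrt(2.0)))  # P(X > a), about 3.2e-5
print(naive_mc(a, sigma, n), tilted_mc(a, sigma, n), exact)
```

With these parameters the naive estimator typically sees no hits at all, while the tilted estimator recovers the tiny probability with a few percent relative error — the low-noise regime where the criteria studied in the paper become relevant.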


Related articles

Variance reduction in sample approximations of stochastic programs

This paper studies the use of randomized Quasi-Monte Carlo methods (RQMC) in sample approximations of stochastic programs. In high dimensional numerical integration, RQMC methods often substantially reduce the variance of sample approximations compared to MC. It seems thus natural to use RQMC methods in sample approximations of stochastic programs. It is shown that RQMC methods produce epi-con...
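One standard RQMC construction of the kind this teaser refers to is a rank-1 lattice rule with a uniform random shift: the shift makes each replicate an unbiased estimate while the lattice structure keeps the points well spread. A minimal stdlib-only sketch (the integrand and the 2D Fibonacci generating vector are illustrative choices, not from the paper):

```python
import random

random.seed(1)

def shifted_lattice(f, n, z, m):
    # Randomized QMC: rank-1 lattice { (i*z/n + shift) mod 1 : i = 0..n-1 },
    # averaged over m independent uniform random shifts.  Each shift yields
    # an unbiased estimate of the integral of f over the unit cube.
    d = len(z)
    estimates = []
    for _ in range(m):
        shift = [random.random() for _ in range(d)]
        total = 0.0
        for i in range(n):
            x = [((i * z[j]) / n + shift[j]) % 1.0 for j in range(d)]
            total += f(x)
        estimates.append(total / n)
    return sum(estimates) / m, estimates

# Smooth test integrand with known integral 1/4 over [0,1]^2;
# n = 987, z = (1, 610) is a classical Fibonacci lattice in 2D.
est, reps = shifted_lattice(lambda x: x[0] * x[1], 987, (1, 610), 8)
print(est)  # close to 0.25
```

The spread of the `m` replicates gives a practical variance estimate, which is how the MC-versus-RQMC comparison in the quoted abstract is typically measured.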


Gaussian Process Approximations of Stochastic Differential Equations

Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presen...


A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the govern...


Stochastic Variance Reduction for Policy Gradient Estimation

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
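The SVRG technique mentioned here replaces the raw stochastic gradient by a control-variate-corrected one: `g_i(w) - g_i(w_snap) + full_grad(w_snap)`, which stays unbiased while its variance vanishes as the iterate approaches the optimum. A minimal sketch on a scalar least-squares toy problem (the data, step size, and loop sizes are illustrative assumptions, not the paper's policy-gradient setting):

```python
import random

random.seed(2)

# Toy data: scalar least squares (1/n) * sum_i (a_i*w - b_i)^2,
# whose minimizer is w* = sum(a_i*b_i) / sum(a_i^2).
data = [(random.uniform(0.5, 2.0), random.uniform(-1.0, 1.0)) for _ in range(200)]
w_star = sum(a * b for a, b in data) / sum(a * a for a, _ in data)

def grad_i(w, i):
    a, b = data[i]
    return 2.0 * a * (a * w - b)

def full_grad(w):
    n = len(data)
    return sum(grad_i(w, i) for i in range(n)) / n

def svrg(w0, eta, epochs, inner):
    w = w0
    for _ in range(epochs):
        w_snap, mu = w, full_grad(w)  # snapshot point and its full gradient
        for _ in range(inner):
            i = random.randrange(len(data))
            # Variance-reduced stochastic gradient: unbiased, and its variance
            # shrinks as both w and w_snap approach the optimum.
            w -= eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

print(svrg(0.0, 0.05, 10, 200), w_star)
```

The occasional full-gradient pass is the price paid for the variance reduction; the quoted paper applies the same correction to policy-gradient estimates, where the excessive variance comes from simulation noise.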


Online Variance Reduction for Stochastic Optimization

Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently pr...
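The contrast this teaser draws — uniform versus non-uniform importance sampling of training examples — is easy to see on synthetic per-example gradients: sampling index `i` with probability `p_i` and returning `g_i / (n * p_i)` is unbiased for the average gradient under any `p`, but its variance depends strongly on `p`. A hypothetical illustration (the gradient magnitudes are made up; `p_i` proportional to `|g_i|` is the variance-minimizing choice for this scalar case):

```python
import random

random.seed(3)

# Per-example "gradients": a few large values among many small ones,
# mimicking a dataset where a minority of examples dominate the update.
g = [10.0 if i < 5 else 0.1 for i in range(100)]

def estimator_variance(probs, trials=50_000):
    # Sample i with probability probs[i]; g[i] / (n * probs[i]) is an unbiased
    # estimator of the average gradient.  Return its empirical variance.
    n = len(g)
    samples = random.choices(range(n), weights=probs, k=trials)
    vals = [g[i] / (n * probs[i]) for i in samples]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / trials

uniform = [1.0 / len(g)] * len(g)
importance = [x / sum(g) for x in g]  # p_i proportional to |g_i|
print(estimator_variance(uniform), estimator_variance(importance))
```

Because all `g_i` here are positive, the proportional scheme makes the estimator exactly constant (zero variance); in practice the per-example gradient norms are unknown and must be estimated online, which is the problem the quoted paper addresses.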



Journal

Journal title: ESAIM

Year: 2023

ISSN: 1270-900X

DOI: https://doi.org/10.1051/m2an/2023042